A musician may begin learning a new piece and find themselves lost in the weeds, fumbling while thinking about fingering options, phrasing decisions, and micro-adjustments to dynamics. A golfer may end up actually lost in the weeds after needlessly obsessing over specialized techniques, swing plane, and ball flights. And the manager rolling out a new AI workflow? Their simple automation idea can devolve into scattershot attempts at broad goals, governance concerns, and vague existential questions about productivity.
Companies in most industries are investing heavily in artificial intelligence, with 88% of companies reporting regular AI use. Yet many leaders report familiar frustrations. AI adoption stalls. Performance gains plateau. Employees experiment with new tools but don't integrate them deeply into how work actually gets done, leaving executives increasingly concerned about ROI.

Erin Eatough is a co-founder and chief science officer at Fractional Insights and a professor of organizational psychology at Michigan State University.
Trust has fast become one of the central questions in every serious conversation about AI. Not capabilities. Not efficiency. Trust. If customers don't trust how companies deploy AI, they'll walk away. If employees don't trust it, they'll disengage. If enterprises don't trust their AI providers, they won't adopt. A recent global KPMG study found that while two-thirds of people now use AI regularly, fewer than half say they're willing to trust it.
A year or so ago, most legal departments were still testing. AI pilots. Workflow trials. Small process experiments. Everyone was learning cautiously. The stakes were relatively low, and the work was labeled "innovation," which made imperfection forgivable. Then something shifted. Those same pilots became part of day-to-day delivery, and the business started relying on them. Sometimes intentionally, because early results looked good.
At a personal level, that means people using AI services want to be able to veto big decisions such as making payments, accessing or using contact details, changing account details, placing orders, or even just to pause for clarity during a decision-making process. Extend this thinking to the workplace, and the resistance is likely to be equally strong.
I admire artists and industrial designers who challenge assumptions. Ross Lovegrove is one of them. If you've never heard of him, he is one of the most visionary creators in the world, designing all sorts of objects, including door handles, computers, fragrance bottles, and concept cars. In an article in the popular design magazine Wallpaper, he claims that the potential of working with AI is utopian. That says a lot coming from someone considered by many to be a futurist.
AI is no longer optional at banks. Building the road map, and showing how it pays off, is the hard part. Alexandra Mousavizadeh, the cofounder and co-CEO of Evident, which tracks AI use in the financial industry, said some AI capabilities are "table stakes" for banks at this point - think back-office functions like reviewing legal documents and routine onboarding tasks. Beyond that, though, Mousavizadeh said banks need to double down on their "competitive edge."
"It's not that Gen Z has confidence necessarily in the market, but they do have a confidence in their ability to adapt," Kyle M.K., Indeed's senior strategy advisor, tells Axios. "This is a group that - for a majority of their lives - they've seen a lot of disruption." "They just have a lot of confidence in themselves to plan accordingly," he adds, "especially as we go through some of this transformative change that we're seeing with AI and the economy."